Week 9 - Tree-Based Methods

Dr. David Elliott

  1. Introduction

  2. General Decision Tree Algorithm

  3. Specific Decision Tree Algorithms

1. Introduction

Tree-based methods stratify or segment the predictor space into a number of simple regions1.

As the splitting rules used to make these decision regions can be summarised in a tree structure, these approaches are called decision trees.

A decision tree can be thought of as breaking the data down by asking a series of questions in order to group samples of the same class together.

NOTES

"In the context of the different categories of machine learning algorithms that we defined at the beginning of this course, we may categorize decision trees as follows:

TODO


https://github.com/rasbt/stat479-machine-learning-fs19/blob/master/06_trees/06-trees__notes.pdf

Terminology5

Root node: no incoming edges and zero or more outgoing edges.

Internal node: one incoming edge, two (or more) outgoing edges.

Leaf node: one incoming edge and no outgoing edges; each leaf node is assigned a class label if it is pure, otherwise the class label is determined by majority vote.

Parent and child nodes: If a node is split, we refer to that given node as the parent node, and the resulting nodes are called child nodes.

Notes

Dataset Example: Penguins

The "palmer penguins" dataset2 contains data for 344 penguins from 3 different species and from 3 islands in the Palmer Archipelago, Antarctica.


Artwork by @allison_horst

2. General Decision Tree Algorithm

The algorithm starts at the tree root and splits the data on the feature that gives the best split according to a splitting criterion.

Generally this splitting procedure occurs until3,4...

NOTES

Below is an example of a very shallow decision tree where we have set max_depth = 1.
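A minimal sketch of fitting such a stump, assuming the Palmer penguins data is loaded via seaborn's bundled copy of the dataset (the two features chosen here are this sketch's choice, not necessarily those used in the lecture figures):

```python
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier, plot_tree

# Load the Palmer penguins data (bundled with seaborn) and drop incomplete rows.
penguins = sns.load_dataset("penguins").dropna()

# Two numeric features and the species label (Adelie, Chinstrap, Gentoo).
X = penguins[["flipper_length_mm", "bill_length_mm"]]
y = penguins["species"]

# A "stump": a tree allowed to make only a single split.
stump = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)

# Visualise the single split.
plot_tree(stump, feature_names=list(X.columns), class_names=list(stump.classes_), filled=True)
```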

Terminology (Reminder)1

Extra: dtreeviz

Here's an additional visualisation package with extra features, such as being able to follow the path of a hypothetical test sample.

I don't use dtreeviz in the lectures, as it can be a bit of a hassle to set up. However, you may also find it a useful way of thinking about the splitting.

Notes

We can make the tree "deeper", and therefore more complex, by setting max_depth = 3.

We could also use more than 2 features as seen below.

NOTES

We could also easily extend this to have more than 2 (binary) class labels.
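Continuing the sketch above (again, the feature choices are illustrative), a deeper multi-class tree on all four numeric measurements might look like:

```python
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier, plot_tree

penguins = sns.load_dataset("penguins").dropna()

# Four numeric features; the target has three class labels (the species).
features = ["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g"]
X, y = penguins[features], penguins["species"]

# Allowing up to three levels of splits gives a deeper, more complex tree.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
plot_tree(tree, feature_names=features, class_names=list(tree.classes_), filled=True)
```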

In a general sense this approach is pretty simple; however, there are a number of design choices and considerations we have to make, including5:

3. Specific Decision Tree Algorithms

Most decision tree algorithms address the following implementation choices differently5:

There are a number of decision tree algorithms; prominent ones include:

Notes

CART

Scikit-Learn uses an optimised version of the Classification And Regression Tree (CART) algorithm.

Notes

Information Gain4

The algorithm starts at the tree root and then splits the data on the feature, $f$, that gives the largest information gain, $IG$.

Splitting using information gain relies on calculating the difference between the impurity of a parent node, $D_p$, and the impurities of its child nodes, $D_j$; information gain is high when the sum of the child node impurities is low.

We can maximise the information gain at each split using,

$$IG(D_p,f) = I(D_p)-\sum^m_{j=1}\frac{N_j}{N_p}I(D_j),$$

where $I$ is our impurity measure, $N_p$ is the total number of samples at the parent node, and $N_j$ is the number of samples in the $j$th child node.

Some algorithms, such as Scikit-learn's implementation of CART, reduce the potential search space by implementing binary trees:

$$IG(D_p,f) = I(D_p) - \frac{N_\text{left}}{N_p}I(D_\text{left})-\frac{N_\text{right}}{N_p}I(D_\text{right}).$$

Three impurity measures that are commonly used in binary decision trees are the classification error ($I_E$), gini impurity ($I_G$), and entropy ($I_H$)4.

Classification Error4

This is simply the fraction of the training observations in a region that do not belong to the most common class:

$$I_E = 1 - \max\left\{p(i|t)\right\}$$

Here $p(i|t)$ is the proportion of the samples that belong to class $i$ (out of $c$ classes) at node $t$.

Notes

Entropy Impurity4

For all non-empty classes ($p(i|t) \neq 0$), entropy is given by

$$I_H=-\sum^c_{i=1}p(i|t)\log_2p(i|t).$$

The entropy is therefore 0 if all samples at the node belong to the same class and maximal if we have a uniform class distribution.

For example in binary classification ($c=2$):

Notes

Gini Impurity4

Gini impurity is an alternative measure, which minimises the probability of misclassification,

$$ \begin{align} I_G(t) &= \sum^c_{i=1}p(i|t)(1-p(i|t)) \\ &= 1-\sum^c_{i=1}p(i|t)^2. \end{align} $$

This measure is also maximal when classes are perfectly mixed (e.g. $c=2$):

$$ \begin{align} I_G(t) &= 1 - \sum^c_{i=1}0.5^2 = 0.5. \end{align} $$
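Pulling the information gain formula and these three impurity measures together, a minimal sketch (not Scikit-learn's internal implementation) might look like:

```python
import numpy as np

def proportions(counts):
    """Class proportions p(i|t) from the raw class counts at a node."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def classification_error(counts):
    return 1.0 - proportions(counts).max()

def gini(counts):
    p = proportions(counts)
    return 1.0 - np.sum(p ** 2)

def entropy(counts):
    p = proportions(counts)
    p = p[p > 0]  # only non-empty classes
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right, impurity=gini):
    """IG(D_p, f) for a binary split, weighting child impurities by sample counts."""
    n_p, n_left, n_right = sum(parent), sum(left), sum(right)
    return (impurity(parent)
            - (n_left / n_p) * impurity(left)
            - (n_right / n_p) * impurity(right))

# A pure split of a perfectly mixed parent gives the maximal entropy gain of 1 bit.
print(information_gain([50, 50], [50, 0], [0, 50], impurity=entropy))  # 1.0
```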

Notes

Why not Classification Error?

Classification Error is rarely used for information gain in practice.

This is because tree growth can get stuck, with splits that fail to improve the classification error at all; this is not the case for a strictly concave impurity function such as entropy or Gini.
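As a small worked example (the numbers are chosen purely for illustration): take a parent node with class counts $(80, 20)$ split into children with counts $(40, 0)$ and $(40, 20)$. Using classification error,

$$IG_E = 0.2 - \frac{40}{100}\cdot 0 - \frac{60}{100}\cdot\frac{1}{3} = 0,$$

so the split appears worthless and growth stalls, whereas using Gini impurity,

$$IG_G = 0.32 - \frac{40}{100}\cdot 0 - \frac{60}{100}\cdot\frac{4}{9} \approx 0.053 > 0,$$

so the same split is (correctly) rewarded.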

Notes

Feature Importance10,11

Decision trees allow us to assess the importance of each feature for classifying the data,

$$ fi_j = \frac{\sum_{t \in s} ni_t}{\sum_{t=1}^{m} ni_t}, $$

where $ni_t$ is the importance of node $t$, $s$ is the set of indices of nodes that split on feature $j$, and $m$ is the total number of nodes.

We often assess the normalized total reduction of the criterion (e.g. Gini) brought by that feature,

$$ normfi_j = \frac{fi_j}{\sum_{k=1}^{p} fi_k}. $$
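In Scikit-learn, these impurity-based importances are exposed via the feature_importances_ attribute of a fitted tree; a brief sketch, reusing the penguin features from earlier:

```python
import pandas as pd
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier

penguins = sns.load_dataset("penguins").dropna()
features = ["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(penguins[features], penguins["species"])

# Normalised total impurity reduction contributed by each feature.
print(pd.Series(tree.feature_importances_, index=features).sort_values(ascending=False))
```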

Pruning

Question: When do we stop growing a tree?

Occam’s razor: Favor a simpler hypothesis, because a simpler hypothesis that fits the data equally well is more likely or plausible than a complex one5.

To minimize overfitting, we can either set limits on the trees before building them (pre-pruning), or reduce the tree by removing branches that do not significantly contribute (post-pruning).

NOTES

Dataset Example: Breast Cancer Wisconsin Dataset12

The features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass and describe characteristics of the cell nuclei present in the image.

The dataset was created from digitized images of healthy (benign) and cancerous (malignant) tissues.


Image from Levenson et al. (2015), PLOS ONE, doi:10.1371/journal.pone.0141357.

Extra

You can explore the data below, although I recommend limiting the number of features to plot for ease of viewing.
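One way to do this, sketched with Scikit-learn's bundled copy of the dataset and a seaborn pair plot (the three features shown are an arbitrary choice):

```python
import seaborn as sns
from sklearn.datasets import load_breast_cancer

# Load the Breast Cancer Wisconsin (Diagnostic) data as a DataFrame.
cancer = load_breast_cancer(as_frame=True)
df = cancer.frame  # features plus a 'target' column (0 = malignant, 1 = benign)

# Limit the pair plot to a few features for ease of viewing.
subset = ["mean radius", "mean texture", "mean concavity", "target"]
sns.pairplot(df[subset], hue="target")
```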

Notes

Pre-Pruning

An a priori limit on nodes, or tree depth, is often set to avoid overfitting due to a deep tree4,5.

Notes

We could also set a minimum number of data points for each node5.
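A short sketch of pre-pruning with Scikit-learn (the particular limits below are arbitrary): both a depth limit and a minimum number of samples per leaf can be set before fitting.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-pruning: cap the depth and require a minimum number of samples in each leaf.
pruned = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10, random_state=0)
pruned.fit(X_train, y_train)

print("train accuracy:", pruned.score(X_train, y_train))
print("test accuracy: ", pruned.score(X_test, y_test))
```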

Post-Pruning

In general, post-pruning consists of going back through the tree once it has been created, removing branches that do not significantly contribute to reducing the error, and replacing them with leaf nodes6.

Two common approaches are reduced-error pruning and cost-complexity pruning.

Cost-complexity pruning7

Using Scikit-learn, we can fit a complex tree with no prior pruning and look at the effective alphas and the corresponding total leaf impurities at each step of the pruning process.

As alpha increases, more of the tree is pruned, thus creating a decision tree that generalizes better.

We can select the alpha that reduces the distance between the train and validation scores.

Notes

Then we can train a decision tree using the chosen effective alpha.
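A sketch of this workflow, following the Scikit-learn cost-complexity pruning example7 (the train/validation split and the rule for picking alpha, here simply the best validation accuracy, are this sketch's choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Effective alphas and total leaf impurities along the pruning path of an unpruned tree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
ccp_alphas, impurities = path.ccp_alphas, path.impurities

# Fit one tree per effective alpha and compare train / validation accuracy.
trees = [DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
         for a in ccp_alphas]
train_scores = [t.score(X_train, y_train) for t in trees]
val_scores = [t.score(X_val, y_val) for t in trees]

# Choose the alpha whose tree generalises best, then refit with it.
best_alpha = ccp_alphas[val_scores.index(max(val_scores))]
final_tree = DecisionTreeClassifier(random_state=0, ccp_alpha=best_alpha).fit(X_train, y_train)
```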

Other Algorithms

ID3 - Iterative Dichotomizer 38

C4.59

Notes

Associated Exercises

Now might be a good time to try exercises 1-3.

References

  1. James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An Introduction to Statistical Learning (Vol. 112). New York: Springer.
  2. Gorman, K. B., Williams, T. D., & Fraser, W. R. (2014). Ecological sexual dimorphism and environmental variability within a community of Antarctic penguins (genus Pygoscelis). PLoS ONE, 9(3), e90081. https://doi.org/10.1371/journal.pone.0090081
  3. Géron, A. (2017). Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. O'Reilly Media, Inc.
  4. Raschka, S., & Mirjalili, V. (2017). Python Machine Learning (2nd ed.). Packt Publishing.
  5. https://github.com/rasbt/stat479-machine-learning-fs19/blob/master/06_trees/06-trees__notes.pdf
  6. Burkov, A. (2019). The Hundred-Page Machine Learning Book (Vol. 1). Canada: Andriy Burkov.
  7. https://scikit-learn.org/stable/auto_examples/tree/plot_cost_complexity_pruning.html
  8. Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1(1), 81-106.
  9. Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann.
  10. https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier
  11. https://towardsdatascience.com/the-mathematics-of-decision-trees-random-forest-and-feature-importance-in-scikit-learn-and-spark-f2861df67e3
  12. https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)